-
Lipoxygenases (LOXs) are a family of metalloenzymes that oxidize polyunsaturated fatty acids, producing cell-signaling hydroperoxides. Fungal LOXs have drawn interest because of their roles in plant and animal pathogenesis. A new subfamily of annotated fungal LOXs has been predicted. One of its unique structural features is a cysteine residue encoded at the otherwise invariant leucine clamp. Herein, we isolate three representatives of this LOX subfamily from recombinant expression in both yeast and bacterial cultures. Metal analysis indicates that the proteins accommodate a mononuclear manganese ion center, similar to other eukaryotic LOXs, but they display only nominal LOX activity. The functional consequence of this non-conservative substitution is further explored using a Leu-to-Cys (L546C) variant of soybean lipoxygenase, a model plant orthologue. While the L546C variant has structural integrity and metal content comparable to the native enzyme, it shows a 50-fold decrease in the first-order rate constant. The presence of cysteine at position 546, compared to leucine, alanine, or serine, also results in a distinctive kinetic lag phase and product inhibition. The collective data highlight that a Cys encoded at the Leu clamp is detrimental to LOX activity. Potential biological functions of these annotated fungal LOXs are discussed.
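To make the kinetic comparison concrete, here is a minimal sketch, with entirely hypothetical parameter values, of what a 50-fold drop in a first-order rate constant and a kinetic lag phase look like in simulated progress curves. This is an illustration only, not the authors' analysis.

```python
import numpy as np

# Illustrative sketch (not the study's data): first-order progress curves
# for product formation, P(t) = S0 * (1 - exp(-k_obs * t)).
# All parameter values are hypothetical, chosen only to visualize a 50-fold
# difference in k_obs and a lag phase.

S0 = 100.0           # initial substrate, arbitrary units (assumed)
k_wt = 5.0           # first-order rate constant, native enzyme, 1/s (assumed)
k_l546c = k_wt / 50  # 50-fold slower, as reported for the L546C variant

t = np.linspace(0, 60, 601)  # seconds

def first_order(t, k, s0=S0):
    """Simple exponential progress curve without a lag."""
    return s0 * (1.0 - np.exp(-k * t))

def with_lag(t, k, t_lag, s0=S0):
    """Crude lag-phase model: no turnover until t_lag, then first order."""
    return np.where(t < t_lag, 0.0, s0 * (1.0 - np.exp(-k * (t - t_lag))))

p_native = first_order(t, k_wt)
p_variant = with_lag(t, k_l546c, t_lag=10.0)  # hypothetical 10 s lag

# Time to reach half-maximal product highlights the rate difference.
t_half_native = t[np.argmax(p_native >= 0.5 * S0)]
t_half_variant = t[np.argmax(p_variant >= 0.5 * S0)]
print(f"t_1/2 native:  {t_half_native:.2f} s")
print(f"t_1/2 variant: {t_half_variant:.2f} s")
```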
-
Introduction
The 'social brain hypothesis' proposes that brain development (particularly in primates) is driven by social complexity rather than group size. Yet small insects with minute brains are capable of the most complex social organization in animals, which warrants further attention. Research has focused on highly eusocial hymenopterans with extreme caste specialization and very large colony sizes that have passed social evolutionary points of no return. However, facultatively social insects that form small colonies (< 20 individuals) are likely to provide greater insight into brain selection at the origin point of social group living.

Methods
We undertake the first neurobiological investigation of the facultatively social allodapine bees (Apidae: Xylocopinae: Allodapini), an exploratory study comparing single- and multi-female colonies of Exoneura angophorae. Using volume as a proxy for neural investment, we measured the mushroom body calyces, optic lobes, antennal lobes, and whole brains of queens, workers, and single females to test three theories associating brain development with behavior: the social brain hypothesis, the distributed cognition hypothesis, and the sensory environment hypothesis.

Results
Mushroom bodies were reduced in subordinate workers but did not differ between queens and single females. Workers had larger optic lobes than queens but did not differ from single females. There were no differences in antennal lobes or whole-brain volume.

Discussion
Social caste, rather than multi-female versus single-female nesting, influenced mushroom body volume in this allodapine bee. This runs counter to both social brain and distributed cognition theories and aligns with halictine and ceratinine bees that also form small facultatively social colonies. Optic lobe enhancement is likely a response to dietary niche requirements for extra-nidal foraging behavior, which may be a highly plastic trait capable of rapid transition among allodapine and ceratinine bees, consistent with ecological intelligence hypotheses. These broad volumetric trends require further investigation of the functional neural circuitry involved in the aforementioned environmental contexts.
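As a methodological illustration only, the sketch below shows one way volumetric comparisons like those in the Methods can be set up: region volumes are normalized by whole-brain volume and compared across castes with a nonparametric test. All numbers and group labels here are invented, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch of a volumetric comparison: region volumes are
# normalized by whole-brain volume to control for overall size, then
# compared across groups. All values below are simulated for illustration.

rng = np.random.default_rng(0)

def relative_volume(region_um3, whole_brain_um3):
    """Region volume as a fraction of whole-brain volume."""
    return np.asarray(region_um3) / np.asarray(whole_brain_um3)

# Simulated mushroom-body calyx and whole-brain volumes (um^3), n = 10 each.
groups = {
    "queens":         relative_volume(rng.normal(2.0e6, 2e5, 10), rng.normal(1.0e7, 5e5, 10)),
    "workers":        relative_volume(rng.normal(1.6e6, 2e5, 10), rng.normal(1.0e7, 5e5, 10)),
    "single_females": relative_volume(rng.normal(2.0e6, 2e5, 10), rng.normal(1.0e7, 5e5, 10)),
}

# Kruskal-Wallis is a common nonparametric choice for small samples
# whose normality is uncertain.
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```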
-
A fundamental notion of distance between train and test distributions from the field of domain adaptation is discrepancy distance. While hard to compute in general, here we provide the first set of provably efficient algorithms for testing localized discrepancy distance, where discrepancy is computed with respect to a fixed output classifier. These results imply a broad set of new, efficient learning algorithms in the recently introduced model of Testable Learning with Distribution Shift (TDS learning) due to Klivans et al. (2023). Our approach generalizes and improves all prior work on TDS learning: (1) we obtain universal learners that succeed simultaneously for large classes of test distributions, (2) we achieve near-optimal error rates, and (3) we give exponential improvements for constant-depth circuits. Our methods further extend to semi-parametric settings and imply the first positive results for low-dimensional convex sets. Additionally, we separate the learning and testing phases and obtain algorithms that run in fully polynomial time at test time.
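For intuition, the following simplified sketch estimates a localized discrepancy from samples: for a fixed output classifier h and a small finite class of alternatives, it compares disagreement rates with h under the training and test marginals. All names and data are hypothetical; this illustrates the quantity being tested, not the paper's algorithms.

```python
import numpy as np

# Simplified, hedged sketch: given a fixed output classifier h and a small
# finite class F of alternative classifiers, compare how often each f in F
# disagrees with h on the training marginal versus the test marginal.

def disagreement_rate(f, h, X):
    """Empirical probability that f and h disagree on sample X."""
    return np.mean(f(X) != h(X))

def localized_discrepancy(F, h, X_train, X_test):
    """max over f in F of |disagree_D(f, h) - disagree_D'(f, h)|."""
    return max(
        abs(disagreement_rate(f, h, X_train) - disagreement_rate(f, h, X_test))
        for f in F
    )

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_train = rng.normal(0.0, 1.0, size=(5000, 2))  # samples from D
    X_test = rng.normal(0.5, 1.0, size=(5000, 2))   # shifted samples from D'

    h = lambda X: np.sign(X[:, 0])                  # the fixed output classifier
    F = [lambda X, t=t: np.sign(X[:, 0] - t)        # a few nearby halfspaces
         for t in np.linspace(-0.5, 0.5, 11)]

    print(f"localized discrepancy estimate: "
          f"{localized_discrepancy(F, h, X_train, X_test):.3f}")
```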
-
We present results from the Chandra X-ray Observatory Large Project (878 ks in 28 observations) of the Large Magellanic Cloud supernova remnant N132D. We measure the expansion of the forward shock in the bright southern rim over the ∼14.5 yr baseline, which corresponds to a velocity of 1620 ± 400 km s−1 after accounting for several instrumental effects. For two features in an apparent blowout region in the northeast, we measure an expansion corresponding to a shock velocity of 3840 ± 260 km s−1. The emission-measure-weighted average temperature inferred from X-ray spectral fits to regions in the southern rim is 0.95 ± 0.17 keV, consistent with the electron temperature implied by the shock velocity after accounting for Coulomb equilibration and adiabatic expansion. In contrast, the emission-measure-weighted average temperature for the northeast region is 0.77 ± 0.04 keV, significantly lower than the value inferred from the shock velocity. We fit 1D evolutionary models for the shock in the southern rim and northeast region, using the measured radius and propagation velocity, for constant-density and power-law circumstellar media. Assuming a constant-density medium, we find good agreement with the age of ∼2500 yr derived from optical expansion measurements for explosion energies of 1.5–3.0 × 10^51 erg, ejecta masses of 2–6 M⊙, and ambient medium densities of ∼0.33–0.66 amu cm−3 in the south and ∼0.01–0.02 amu cm−3 in the northeast. These results are consistent with previous studies suggesting that the progenitor of N132D was an energetic supernova that exploded into a preexisting cavity.
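The expansion velocities quoted above come from converting an angular expansion rate into a transverse velocity at the distance of the LMC. Below is a minimal sketch of that standard conversion, assuming an LMC distance of ∼50 kpc and a purely hypothetical angular shift; the measured shifts themselves are given in the paper's full text.

```python
# Hedged sketch of the standard conversion: a proper motion mu (arcsec/yr)
# at distance d (pc) corresponds to a transverse velocity
# v = 4.74 * mu * d (km/s). The LMC distance (~50 kpc) and the example
# angular shift are assumptions for illustration, not values from the paper.

D_LMC_PC = 50_000.0  # assumed LMC distance in parsecs (~50 kpc)

def transverse_velocity_kms(mu_arcsec_per_yr: float, d_pc: float = D_LMC_PC) -> float:
    """v (km/s) from proper motion (arcsec/yr) and distance (pc)."""
    return 4.74 * mu_arcsec_per_yr * d_pc

def proper_motion(shift_arcsec: float, baseline_yr: float) -> float:
    """Angular shift over a time baseline -> proper motion in arcsec/yr."""
    return shift_arcsec / baseline_yr

# Example: a hypothetical 0.1 arcsec shift over the ~14.5 yr baseline.
mu = proper_motion(0.1, 14.5)
print(f"mu = {mu:.5f} arcsec/yr -> v = {transverse_velocity_kms(mu):.0f} km/s")
```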
-
Protein language models, like the popular ESM2, are widely used tools for extracting evolution-based protein representations and have achieved significant success on downstream biological tasks. Representations based on sequence and structure models, however, show significant performance differences depending on the downstream task. A major open problem is to obtain representations that best capture both the evolutionary and structural properties of proteins in general. Here we introduce Implicit Structure Model (ISM), a sequence-only input model with structurally enriched representations that outperforms state-of-the-art sequence models on several well-studied benchmarks, including mutation stability assessment and structure prediction. Our key innovations are a microenvironment-based autoencoder for generating structure tokens and a self-supervised training objective that distills these tokens into ESM2's pre-trained model. We have made ISM's structure-enriched weights easily available: integrating ISM into any application using ESM2 requires changing only a single line of code. Our code is available at https://github.com/jozhang97/ISM.
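As an illustration of the advertised one-line swap, the sketch below loads ESM2 through Hugging Face transformers and marks where an ISM checkpoint would be substituted. The hub id for ISM is a guess inferred from the GitHub URL, and the assumption that the weights load through the same ESM2-compatible interface should be checked against the repository's README.

```python
from transformers import AutoModel, AutoTokenizer

# Hedged sketch of the "single line of code" swap described above, assuming
# ISM weights are published in an ESM2-compatible checkpoint format.
# "jozhang97/ISM" is a hypothetical hub id guessed from the GitHub URL;
# consult the repository's README for the actual checkpoint name.

MODEL_ID = "facebook/esm2_t33_650M_UR50D"  # a standard ESM2 checkpoint
# MODEL_ID = "jozhang97/ISM"               # hypothetical ISM drop-in swap

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Extract per-residue representations for a toy sequence.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(sequence, return_tensors="pt")
embeddings = model(**inputs).last_hidden_state  # (1, seq_len + specials, hidden)
print(embeddings.shape)
```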
-
We revisit the fundamental problem of learning with distribution shift, in which a learner is given labeled samples from a training distribution D, unlabeled samples from a test distribution D', and is asked to output a classifier with low test error. The standard approach in this setting is to bound the loss of a classifier in terms of some notion of distance between D and D'. These distances, however, seem difficult to compute and do not lead to efficient algorithms.

We depart from this paradigm and define a new model called testable learning with distribution shift, in which we can obtain provably efficient algorithms for certifying the performance of a classifier on a test distribution. In this model, a learner outputs a classifier with low test error whenever samples from D and D' pass an associated test; moreover, the test must accept (with high probability) if the marginal of D equals the marginal of D'. We give several positive results for learning well-studied concept classes such as halfspaces, intersections of halfspaces, and decision trees when the marginal of D is Gaussian or uniform on the hypercube. Prior to our work, no efficient algorithms for these basic cases were known without strong assumptions on D'.

For halfspaces in the realizable case (where there exists a halfspace consistent with both D and D'), we combine a moment-matching approach with ideas from active learning to simulate an efficient oracle for estimating disagreement regions. To extend to the non-realizable setting, we apply recent work from testable (agnostic) learning. More generally, we prove that any function class with low-degree L2-sandwiching polynomial approximators can be learned in our model. Since we require L2-sandwiching (instead of the usual L1 loss), we cannot directly appeal to convex duality and instead apply constructions from the pseudorandomness literature to obtain the required approximators.

We also provide lower bounds showing that the guarantees we obtain on the performance of our output hypotheses are best possible up to constant factors, as well as a separation showing that realizable learning in our model is incomparable to (ordinary) agnostic learning.
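For intuition about the testing phase, here is a simplified, hypothetical moment-matching tester: it accepts the test samples only when their low-degree empirical moments match those of the training samples up to a slack. The paper's testers are more refined (matching against the target Gaussian or hypercube marginal with carefully chosen degree and slack); this sketch only conveys the shape of such a test.

```python
import numpy as np
from itertools import combinations_with_replacement

# Simplified, hypothetical illustration of a TDS-style test: accept the
# test samples only if their low-degree empirical moments match those of
# the training marginal up to a slack tau. Degree and tau are illustrative.

def monomial_moments(X, degree):
    """Empirical moments E[x_i * x_j * ...] for all monomials up to `degree`."""
    n, d = X.shape
    moments = []
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            moments.append(np.prod(X[:, idx], axis=1).mean())
    return np.array(moments)

def moment_matching_test(X_train, X_test, degree=2, tau=0.05):
    """Accept iff every low-degree moment of D' is within tau of D's."""
    gap = np.abs(monomial_moments(X_train, degree) - monomial_moments(X_test, degree))
    return bool(np.all(gap <= tau))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X_train = rng.standard_normal((20_000, 3))
    X_same = rng.standard_normal((20_000, 3))          # same marginal: accept
    X_shift = rng.standard_normal((20_000, 3)) + 0.3   # shifted marginal: reject
    print("same marginal accepted:   ", moment_matching_test(X_train, X_same))
    print("shifted marginal accepted:", moment_matching_test(X_train, X_shift))
```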